security researcher
Let the Trial Begin: A Mock-Court Approach to Vulnerability Detection using LLM-Based Agents
Widyasari, Ratnadira, Weyssow, Martin, Irsan, Ivana Clairine, Ang, Han Wei, Liauw, Frank, Ouh, Eng Lieh, Shar, Lwin Khin, Kang, Hong Jin, Lo, David
Detecting vulnerabilities in source code remains a critical yet challenging task, especially when benign and vulnerable functions share significant similarities. In this work, we introduce VulTrial, a courtroom-inspired multi-agent framework designed to identify vulnerable code and to provide explanations. It employs four role-specific agents, which are security researcher, code author, moderator, and review board. Using GPT-4o as the base LLM, VulTrial almost doubles the efficacy of prior best-performing baselines. Additionally, we show that role-specific instruction tuning with small quantities of data significantly further boosts VulTrial's efficacy. Our extensive experiments demonstrate the efficacy of VulTrial across different LLMs, including an open-source, in-house-deployable model (LLaMA-3.1-8B), as well as the high quality of its generated explanations and its ability to uncover multiple confirmed zero-day vulnerabilities in the wild.
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.05)
- Asia > Singapore (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (2 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Amazon Explains How Its AWS Outage Took Down the Web
Plus: The Jaguar Land Rover hack sets an expensive new record, OpenAI's new Atlas browser raises security fears, Starlink cuts off scam compounds, and more. The cloud giant Amazon Web Services experienced DNS resolution issues on Monday leading to cascading outages that took down wide swaths of the web . Monday's meltdown illustrated the world's fundamental reliance on so-called hyperscalers like AWS and the challenges for major cloud providers and their customers alike when things go awry . See below for more about how the outage occurred. US Justice Department indictments in a mob-fueled gambling scam reverberated through the NBA on Thursday.
- Asia > Middle East > Palestine (0.14)
- Asia > Myanmar (0.06)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- (9 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Information Technology > Services (1.00)
- (2 more...)
- Information Technology > Communications > Web (0.91)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.53)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.52)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.38)
The US Court Records System Has Been Hacked
This is the week of Black Hat and Defcon, which means a flood of news coming out of the Las Vegas security conferences. As you might expect, artificial intelligence was one popular topic--specifically, using AI chatbots to cause mischief. One team of researchers, from Tel Aviv University, created a clever attack that allowed them to take over a target's smart home devices using a "poisoned" Google Calendar invite. It's the first known attack method that used AI to impact physical devices. Another researcher used a poisoned document that included a malicious prompt to trick ChatGPT into leaking a user's private information when it's connected to a Google Drive.
- North America > United States > Nevada > Clark County > Las Vegas (0.26)
- Asia > Middle East > Israel > Tel Aviv District > Tel Aviv (0.26)
- North America > United States > Tennessee (0.06)
- Asia > North Korea (0.06)
- Information Technology > Security & Privacy (0.99)
- Government > Military (0.77)
Technical Perspective: Machine Learning in Computer Security is Difficult to Fix
During an interview in 2017, Andrew Ng--one of the most renowned computer scientists in the field of artificial intelligence (AI)--was reported to say: "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years." Indeed, over the last decade, we have observed a rebirth of interest in AI and, more specifically, in its machine learning (ML) subfield, which is aimed at designing algorithms that learn from examples. This has been fueled by the availability of large volumes of data over the Internet, the increased computing power of today's hardware and cloud infrastructures, and the algorithmic improvements in deep learning and neural networks, which have shown tremendous progress in dealing with text, audio, image, and video data. Their success has been even reinforced more recently by the advent of foundational and generative AI models that can generate realistic text, images, and videos with impressive quality. For these reasons, AI and ML have been fostering important advancements in healthcare, automotive, robotics, recommendation systems, chatbots, and many other applications.
Beware of these 7 new hacker tricks -- and how to protect yourself
Following the huge wave of ransomware last year, there's now increasing reports of completely new tricks used by hackers and cybercriminals to gain access to computer systems, devices, and networks. Many of these tricks exploit existing vulnerabilities in applications and operating systems, but these perpetrators are also developing completely new approaches that combine technical procedures with social engineering to achieve their goals. To recap if you're unaware: social engineering is when a malicious person exploits you through helpfulness, trust, fear, or respect in an attempt to manipulate you into doing something. Examples of social engineering include: a work email purporting to come from your boss with a payment order for a large sum to a foreign account; a WhatsApp message from someone pretending to be your relative in need of money; or a phishing email that claims to be your bank asking you to click a link with scary consequences if you don't. Here are some of the latest scams and techniques used by criminals that you need to know about--and how you can protect yourself.
Criminals Are Using Tiny Devices to Hack and Steal Cars
Employees of the US Immigration and Customs Enforcement agency (ICE) abused law enforcement databases to snoop on their romantic partners, neighbors, and business associates, WIRED exclusively revealed this week. New data obtained through record requests show that hundreds of ICE staffers and contractors have faced investigations since 2016 for attempting to access medical, biometric, and location data without permission. The revelations raise further questions about the protections ICE places on people's sensitive information. Security researchers at ESET found old enterprise routers are filled with company secrets. After purchasing and analyzing old routers, the firm found many contained login details for company VPNs, hashed root administrator passwords, and details of who the previous owners were.
- North America > United States (0.90)
- Asia > Russia (0.16)
- North America > Canada > Ontario > Toronto (0.15)
- (7 more...)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (0.98)
- Information Technology > Communications > Networks (0.91)
- (3 more...)
The Hacking of ChatGPT Is Just Getting Started
It took Alex Polyakov just a couple of hours to break GPT-4. When OpenAI released the latest version of its text-generating chatbot in March, Polyakov sat down in front of his keyboard and started entering prompts designed to bypass OpenAI's safety systems. Soon, the CEO of security firm Adversa AI had GPT-4 spouting homophobic statements, creating phishing emails, and supporting violence. Polyakov is one of a small number of security researchers, technologists, and computer scientists developing jailbreaks and prompt injection attacks against ChatGPT and other generative AI systems. The process of jailbreaking aims to design prompts that make the chatbots bypass rules around producing hateful content or writing about illegal acts, while closely-related prompt injection attacks can quietly insert malicious data or instructions into AI models.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
DoD Chief Digital and Artificial Intelligence Office Launches Hack the Pentagon Website > U.S. Department of Defense > Release
The Chief Digital and Artificial Intelligence Office (CDAO) Directorate for Digital Services (DDS) has launched a website (www.hackthepentagon.mil) to accompany their long-running program: Hack the Pentagon (HtP). DDS launched HtP in 2016, using bug bounties as an innovative way to secure critical Department of Defense (DoD) systems and assets. HtP invites vetted, independent security researchers, known as "ethical hackers", to discover, investigate, and report vulnerabilities, which DoD can then remediate. DDS built the HtP website as a resource for Department of Defense organizations, vendors, and security researchers to learn how to conduct a bug bounty, partner with the CDAO DDS team to support bug bounties, and participate in DoD-wide bug bounties. "With the HtP website launch, CDAO is scaling a long running program, which historically offered services on a project-by-project basis, by offering the Department better access to lessons learned and best practices for hosting bug bounties," said Dr. Craig Martell, Chief Digital and Artificial Intelligence Officer.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Is ChatGPT a cybersecurity threat? • TechCrunch
Since its debut in November, ChatGPT has become the internet's new favorite plaything. The AI-driven natural language processing tool rapidly amassed more than 1 million users, who have used the web-based chatbot for everything from generating wedding speeches and hip-hop lyrics to crafting academic essays and writing computer code. Not only have ChatGPT's human-like abilities taken the internet by storm, but it has also set a number of industries on edge: a New York school banned ChatGPT over fears that it could be used to cheat, copywriters are already being replaced, and reports claim Google is so alarmed by ChatGPT's capabilities that it issued a "code red" to ensure the survival of the company's search business. It appears the cybersecurity industry, a community that has long been skeptical about the potential implications of modern AI, is also taking notice amid concerns that ChatGPT could be abused by hackers with limited resources and zero technical knowledge. Just weeks after ChatGPT debuted, Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in tandem with OpenAI's code-writing system Codex, could create a phishing email capable of carrying a malicious payload. Check Point threat intelligence group manager Sergey Shykevich told TechCrunch that he believes use cases like this illustrate that ChatGPT has the "potential to significantly alter the cyber threat landscape," adding that it represents "another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities."
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Here's how crooks are using deepfakes to scam your biz
All of the materials and tools needed to make deepfake videos – from source code to publicly available images and account authentication bypass services – are readily available and up for sale on the public internet and underground forums. Cyber criminals are taking advantage of this easy access to resources, and using deepfakes to build on today's crime techniques, such as business email compromise (BEC), to make off with even more money, according to Trend Micro researchers. Not only that, but deepfakes are being used in web ads to make Elon Musk, security specialists, and others appear as though they are endorsing products to which they have no connection with. "The growing appearance of deepfake attacks is significantly reshaping the threat landscape for organizations, financial institutions, celebrities, political figures, and even ordinary people," the security outfit's Vladimir Kropotov, Fyodor Yarochkin, Craig Gibson, and Stephen Hilt warned in research published on Tuesday. Specifically, corporations need to worry about deepfakes, we're told, as criminals begin using them to create fake individuals, such as job seekers to scam their way into roles, or impersonate executives on video calls to hoodwink employees into transferring company funds or data.